How AI agents reshape industrial automation and risk management
In this Help Net Security interview, Michael Metzler, Vice President Horizontal Management Cybersecurity for Digital Industries at Siemens, discusses the cybersecurity implications of deploying AI agents in industrial environments. He talks about the risks that come with AI agents making semi-autonomous decisions, and why a layered security approach like Defense-in-Depth is key to keeping industrial systems safe.
What are the implications of an AI agent being compromised in a critical infrastructure environment, such as an energy plant or manufacturing line? What new cybersecurity risks do AI agents introduce into industrial environments compared to traditional automation systems?
AI agents represent an important advancement in industrial automation, offering new capabilities through their semi-autonomous decision-making features. As with any technological advancement, successful implementation requires thoughtful integration with existing industrial safety and security standards and protocols.
A robust security framework for AI agents builds upon established industrial security principles while incorporating specific measures for autonomous systems. This includes comprehensive authentication, authorization and communication protocols, ensuring agents operate within clearly defined operational guardrails. Well-implemented security measures such as continuous verification, appropriate access controls, and behavioral analytics enable organizations to effectively utilize AI capabilities while maintaining operational safety and security.
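The idea of clearly defined operational guardrails can be made concrete with a small sketch. The parameter names, limits, and `ProposedAction` type below are hypothetical illustrations, not part of any actual Siemens framework: an agent's proposed setpoint is validated against a predefined operating envelope before it is allowed to take effect.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    parameter: str   # e.g. a hypothetical "furnace_temp_c" setpoint
    value: float

def within_guardrails(action: ProposedAction,
                      limits: dict[str, tuple[float, float]]) -> bool:
    """Reject any agent decision outside its predefined operating envelope.

    `limits` maps a parameter name to its (min, max) safe range; any
    parameter without an entry is denied by default (fail-safe).
    """
    if action.parameter not in limits:
        return False
    lo, hi = limits[action.parameter]
    return lo <= action.value <= hi
```

Note the fail-safe default: a parameter the guardrail table does not know about is denied rather than allowed, which matches the principle that autonomy is granted only within explicitly defined bounds.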
The key to successful AI agent deployment lies in the balanced integration of innovative automation capabilities with proven industrial safety systems. This approach allows organizations to enhance their operations through AI while ensuring that fundamental safety and security requirements continue to be met.
Many AI agents operate semi-autonomously or make decisions based on real-time data. How can organizations ensure these decisions don’t inadvertently violate safety or security protocols?
In industrial environments, AI agents are tools designed to augment, not replace, human capabilities and oversight. This concept is known as “human-in-the-loop.” These agents operate within carefully defined guardrails and multiple security layers to ensure that industrial environments remain protected while benefiting from advanced automation capabilities. They operate as a coordinated ensemble, managed through centralized orchestration. The orchestrator agent maintains comprehensive awareness of all individual agents’ capabilities.
This software layer handles agent management, activating or deactivating agents based on specific task requirements. Importantly, users maintain control through an interface that enables them to selectively deploy specific agents and capabilities according to their operational needs. Additionally, extensive testing of prompts and responses should be conducted using LLM test frameworks. This helps identify potential attack scenarios before shipment.
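The orchestration pattern described above can be sketched in a few lines. This is an illustrative model only, with hypothetical agent names and capability labels, not a real Siemens API: a central orchestrator knows every registered agent's capabilities, activates only those a task requires, and activates nothing without explicit user approval (human-in-the-loop).

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    capabilities: set
    active: bool = False

class Orchestrator:
    """Central software layer that tracks every agent's capabilities and
    switches agents on or off per task (illustrative sketch)."""

    def __init__(self):
        self.registry = {}

    def register(self, agent):
        self.registry[agent.name] = agent

    def activate_for_task(self, required_capabilities, user_approved):
        """Activate only agents whose capabilities the task needs.

        Human-in-the-loop: nothing is activated without user approval.
        """
        if not user_approved:
            return []
        activated = []
        for agent in self.registry.values():
            agent.active = bool(agent.capabilities & set(required_capabilities))
            if agent.active:
                activated.append(agent.name)
        return sorted(activated)
```

A usage example under these assumptions: registering a diagnostics agent and a tuning agent, then requesting a read-only task, activates only the diagnostics agent, and only once the user has approved.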
Furthermore, secure production environments require close collaboration between IT departments and production personnel. A key to success is interdisciplinary teams in which IT and OT professionals combine their expertise and learn from each other. Only through this collaboration can a comprehensive security approach be developed. A joint security strategy is crucial for reconciling both areas’ priorities. For instance, patch management and access controls must meet both IT security requirements and OT’s special operating conditions. Such strategies help align differing priorities: availability in OT and confidentiality in IT.
What cybersecurity controls or governance frameworks do you recommend for organizations deploying AI agents in OT environments?
A proven approach to securing OT systems is the “Defense-in-Depth” concept, pursued by companies like Siemens. This multi-layered security approach creates multiple defense levels in the production environment and is based on the international industrial standard series IEC 62443, considered the leading standard for industrial cybersecurity. The deployment of AI agents maintains the integrity of this approach.
The Defense-in-Depth concept considers all essential security factors, including physical access protection for manufacturing sites as well as organizational and technical measures to protect production networks and control systems from unauthorized access, espionage, and manipulation. This concept is complemented by zero-trust principles focusing on verifying and authorizing communicating units.
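The interplay of Defense-in-Depth and zero trust can be sketched as a chain of independent checks, where a request is allowed only if every layer passes. All names below (the network segment, credential store, and access-control list) are hypothetical examples, not real product details:

```python
def authorize(request, layers):
    """Defense-in-Depth: a request passes only if every independent
    security layer allows it; a single failing layer denies it."""
    return all(layer(request) for layer in layers)

KNOWN_TOKENS = {"hmi_01": "t0k3n"}  # hypothetical credential store

def in_allowed_segment(req):
    # Network-level layer: the sender must be in a permitted cell network
    return req.get("segment") == "cell_network"

def identity_verified(req):
    # Zero trust: verify the sender's credential on every single request,
    # never assume trust based on network location alone
    return KNOWN_TOKENS.get(req.get("unit")) == req.get("token")

def action_permitted(req):
    # Authorization layer: per-unit access-control list
    acl = {"hmi_01": {"read"}}
    return req.get("action") in acl.get(req.get("unit"), set())

layers = [in_allowed_segment, identity_verified, action_permitted]
```

The design point is that the layers are independent: compromising one (say, gaining network access) still leaves identity verification and per-resource authorization standing, which is the core rationale of a multi-layered approach.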
What advice would you give to CISOs or plant managers who are under pressure to adopt AI tools quickly but are concerned about security risks?
To effectively improve OT security, plant operators should first conduct a comprehensive security assessment of their production environment. Such regular assessments help identify potential vulnerabilities. These assessments can identify critical components and devices in plants that require particular protection, allowing targeted measures to ensure their security. To implement the Defense-in-Depth concept, various technologies and tools are deployed based on assessment results.
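Turning assessment results into targeted measures typically means ranking components by how critical and how exposed they are. A minimal sketch, assuming a hypothetical 1-to-5 scoring scheme (not a prescribed methodology):

```python
def prioritize_assets(assets):
    """Rank assessed components so protective measures target the most
    critical, most exposed devices first (hypothetical risk scoring:
    priority = criticality x exposure, each scored 1-5 in the assessment)."""
    return sorted(assets,
                  key=lambda a: a["criticality"] * a["exposure"],
                  reverse=True)
```

For example, a safety PLC scored 5 for criticality and 4 for exposure would be ranked ahead of an engineering workstation scored 3 and 2, so hardening effort goes where an incident would hurt most.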
Since OT systems often operate for many years without major changes, available updates and patches must be implemented regularly to close detected vulnerabilities. However, such measures must not interrupt production unexpectedly, requiring careful planning and coordination. Moreover, the human factor is often a gateway for cyberattacks, making regular employee training essential to raise awareness of security risks and implement OT security requirements in daily operations.
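The planning constraint above, that patching must never interrupt production unexpectedly, is often enforced by gating updates on planned maintenance windows. A minimal sketch with hypothetical window times:

```python
from datetime import datetime

def can_patch_now(now, maintenance_windows):
    """Allow patch deployment only inside a planned maintenance window,
    so updates never interrupt running production unexpectedly.

    `maintenance_windows` is a list of (start, end) datetime pairs
    agreed between IT and OT teams (hypothetical scheduling data)."""
    return any(start <= now < end for start, end in maintenance_windows)
```

In practice such a gate sits alongside the coordination the text describes: the window list itself is the artifact of joint IT/OT planning, and the check merely enforces it.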
When implementing AI tools, plant managers should start with a thorough security assessment and then implement AI agents in phases, beginning with non-critical processes. Following that, clear governance frameworks should be established and aligned with existing security protocols. Maintaining regular security audits and updates is crucial, as is investing in staff training and security awareness. The key to successful AI integration lies in treating security not as an afterthought but as a fundamental design principle.